Upper bounds on the number of hidden neurons in feedforward networks with arbitrary bounded nonlinear activation functions
Authors: Guang-Bin Huang, Haroon A. Babri
Abstract
It is well known that standard single-hidden-layer feedforward networks (SLFNs) with at most N hidden neurons (including biases) can learn N distinct samples (x_i, t_i) with zero error, and that the weights connecting the input neurons to the hidden neurons can be chosen "almost" arbitrarily. However, these results were obtained for the case where the activation function of the hidden neurons is the signum function. This paper rigorously proves that standard SLFNs with at most N hidden neurons and any bounded nonlinear activation function that has a limit at one infinity can learn N distinct samples (x_i, t_i) with zero error. The previous method of choosing the weights arbitrarily, however, is not feasible for SLFNs with such general activation functions. The proof of our result is constructive and thus yields a method for directly finding the weights of a standard SLFN with any such bounded nonlinear activation function, as opposed to the iterative training algorithms in the literature.
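To make the constructive idea concrete, the following is a minimal NumPy sketch, not the paper's exact construction: with N hidden neurons, the N x N hidden-layer output matrix H can be made nonsingular, after which the hidden-to-output weights that interpolate all N samples are found by solving one linear system rather than by iterative training. The random choice of input weights and biases here is purely illustrative (and works with probability 1 for an activation such as tanh); the paper instead constructs them explicitly so that H is guaranteed invertible.

```python
import numpy as np

# Minimal sketch (not the paper's exact construction): fit N distinct
# samples with an SLFN that has exactly N hidden neurons by solving a
# linear system for the output weights.

rng = np.random.default_rng(0)

# N distinct training samples (x_i, t_i); shapes are illustrative.
N, d = 10, 3
X = rng.standard_normal((N, d))      # inputs, one row per sample
T = rng.standard_normal((N, 1))      # targets

# A bounded nonlinear activation with a limit at (one) infinity, e.g. tanh.
g = np.tanh

# Hidden-layer parameters, drawn at random purely for illustration;
# the paper constructs them so that H below is provably nonsingular.
W = rng.standard_normal((d, N))      # input-to-hidden weights
b = rng.standard_normal(N)           # hidden biases

H = g(X @ W + b)                     # N x N hidden-layer output matrix

# With H invertible, the output weights solving H @ beta = T
# interpolate all N samples exactly -- no iterative training needed.
beta = np.linalg.solve(H, T)

print(np.max(np.abs(H @ beta - T))) # numerically ~0, i.e. zero error
```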
Similar resources
Approximation bounds for smooth functions in C(R^d) by neural and mixture networks
We consider the approximation of smooth multivariate functions in C(R^d) by feedforward neural networks with a single hidden layer of nonlinear ridge functions. Under certain assumptions on the smoothness of the functions being approximated and on the activation functions in the neural network, we present upper bounds on the degree of approximation achieved over the domain R^d, thereby generali...
Vapnik-Chervonenkis Dimension of Recurrent Neural Networks
Most of the work on the Vapnik-Chervonenkis dimension of neural networks has been focused on feedforward networks. However, recurrent networks are also widely used in learning applications, in particular when time is a relevant parameter. This paper provides lower and upper bounds for the VC dimension of such networks. Several types of activation functions are discussed, including threshold, po...
Classification ability of single hidden layer feedforward neural networks
Multilayer perceptrons with hard-limiting (signum) activation functions can form complex decision regions. It is well known that a three-layer perceptron (two hidden layers) can form arbitrary disjoint decision regions and a two-layer perceptron (one hidden layer) can form single convex decision regions. This paper further proves that single hidden layer feedforward neural networks (SLFN's) wit...
DALD: Distributed Asynchronous Local Decontamination Algorithm in Arbitrary Graphs
Network environments can always be invaded by intruder agents. In networks whose nodes perform computations, intruder agents may contaminate some of the nodes. The problem of decontaminating a network infected by intruder agents is therefore a major problem in such networks. In this paper, we present a distributed asynchronous local algorithm for decontaminating a network. In mos...
Journal: IEEE Transactions on Neural Networks
Volume: 9, Issue: 1
Pages: -
Published: 1998